
    Labor Union and Linguistic Attributes in Firm Disclosure

    Little research examines managers' language itself in the presence of labor unions, especially through a rich communication channel such as earnings conference calls. By disentangling the two latent components of linguistic complexity (i.e., information and obfuscation) in conference call transcripts, I find that firms with stronger labor unions tend to disclose less information and, surprisingly, employ less obfuscation. However, the negative relation between obfuscation and union strength is driven by the subsample of loss firms, indicating that firms with a powerful labor union are less likely to strategically obfuscate negative news and are instead forthcoming about negative information in order to gain bargaining power. Furthermore, I document that unionized firms tend to disclose less forward-looking information and use more negative words in their narratives. This study provides a comprehensive view of the nuanced linguistic styles and content through which firms respond to labor unions.
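    The design described above reduces to regressing each linguistic component on union strength, with the loss-firm result captured by a subsample split or an interaction term. A minimal sketch of that structure follows; the dataset, column names (obfuscation, union_strength, loss_firm, assets), and the specification are hypothetical placeholders, not the paper's actual variables.

```python
# Hypothetical sketch of the regression structure implied by the abstract;
# variable names and controls are placeholders, not the paper's specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

calls = pd.read_csv("conference_call_panel.csv")  # assumed firm-quarter panel

# Is obfuscation lower when unions are stronger, and is the effect
# concentrated among loss firms? An interaction term captures both questions.
model = smf.ols("obfuscation ~ union_strength * loss_firm + np.log(assets)", data=calls)
print(model.fit(cov_type="HC1").summary())
```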

    Generalized Bayesian Multidimensional Scaling and Model Comparison

    Multidimensional scaling is widely used to reconstruct a map of the points' coordinates in a low-dimensional space from the original high-dimensional space while preserving the pairwise distances. In a Bayesian framework, the current approach using Markov chain Monte Carlo algorithms has limitations in terms of model generalization and performance comparison. To address these limitations, a general framework that incorporates non-Gaussian errors and robustness to fit different types of dissimilarities is developed. Then, an adaptive inference method using an annealed Sequential Monte Carlo algorithm for Bayesian multidimensional scaling is proposed. This algorithm performs inference sequentially in time and provides an approximate posterior distribution over the points' coordinates in a low-dimensional space and an unbiased estimator for the marginal likelihood. In this study, we compare the performance of different models based on marginal likelihoods, which are produced as a byproduct of the adaptive annealed Sequential Monte Carlo algorithm. Using synthetic and real data, we demonstrate the effectiveness of the proposed algorithm. Our results show that the proposed algorithm outperforms other benchmark algorithms under the same computational budget based on common metrics used in the literature. The implementation of our proposed method and applications are available at https://github.com/nunujiarui/GBMDS
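    A compact sketch of the annealed Sequential Monte Carlo idea for Bayesian multidimensional scaling follows: particles are drawn from the prior over coordinates, reweighted along a tempering schedule from prior to posterior, resampled, and moved with a random-walk Metropolis kernel, while the marginal likelihood is accumulated from the incremental weights. The Gaussian error model, fixed noise scale, linear tempering schedule, and all names below are simplifying assumptions; the paper's generalized errors and adaptive schedule are not reproduced here.

```python
# Simplified annealed-SMC sketch for Bayesian MDS (Gaussian errors, fixed
# noise scale, linear tempering); an illustration, not the GBMDS package.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def pairwise(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def log_lik(X, D, sigma=0.1):
    iu = np.triu_indices(D.shape[0], k=1)
    resid = D[iu] - pairwise(X)[iu]
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def annealed_smc_bmds(D, dim=2, n_particles=200, n_steps=50, prior_sd=1.0, step=0.1):
    n = D.shape[0]
    gammas = np.linspace(0.0, 1.0, n_steps + 1)                 # tempering schedule
    X = rng.normal(scale=prior_sd, size=(n_particles, n, dim))  # particles from the prior
    ll = np.array([log_lik(x, D) for x in X])
    log_Z = 0.0                                                 # log marginal likelihood estimate
    for t in range(1, n_steps + 1):
        inc = (gammas[t] - gammas[t - 1]) * ll                  # incremental log-weights
        log_Z += logsumexp(inc) - np.log(n_particles)
        w = np.exp(inc - inc.max()); w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)    # multinomial resampling
        X, ll = X[idx], ll[idx]
        for i in range(n_particles):                            # one Metropolis move per particle
            prop = X[i] + step * rng.normal(size=X[i].shape)
            ll_prop = log_lik(prop, D)
            log_alpha = (gammas[t] * (ll_prop - ll[i])
                         + 0.5 * ((X[i] ** 2).sum() - (prop ** 2).sum()) / prior_sd ** 2)
            if np.log(rng.uniform()) < log_alpha:
                X[i], ll[i] = prop, ll_prop
    return X, log_Z   # posterior particles and log marginal likelihood
```

    Comparing the returned log marginal likelihood across candidate models (e.g., different embedding dimensions or error forms) is the model-comparison byproduct the abstract refers to.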

    Quantum-dot gain without inversion: Effects of dark plasmon-exciton hybridization

    We propose an initial-state-dependent quantum-dot gain without population inversion in the vicinity of a resonant metallic nanoparticle. The gain originates from the hybridization of a dark plasmon and an exciton and is accompanied by efficient energy transfer from the nanoparticle to the quantum dot. This dark plasmon-exciton hybridization, coupled to the bright plasmon-exciton hybridization, strengthens nonlinear light-quantum-emitter interactions at the nanoscale, so the spectral overlap between the dark and the bright plasmons enhances the gain effect. This hybrid system has potential applications in ultracompact tunable quantum devices.
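    For readers who want the hybridization picture in symbols, a generic coupled-mode Hamiltonian of the kind commonly used for such hybrid systems is sketched below. It is an assumed illustrative form (a two-level exciton coupled to one bright and one dark plasmon mode in the rotating-wave approximation), not the specific model of this paper.

```latex
% Illustrative coupled-mode Hamiltonian (rotating-wave approximation);
% an assumed generic form, not the paper's specific model.
\begin{equation}
  H = \hbar\omega_x\,\sigma^{\dagger}\sigma
    + \hbar\omega_b\, b^{\dagger} b
    + \hbar\omega_d\, d^{\dagger} d
    + \hbar g_b \left(\sigma^{\dagger} b + b^{\dagger}\sigma\right)
    + \hbar g_d \left(\sigma^{\dagger} d + d^{\dagger}\sigma\right)
    + \hbar\kappa \left(b^{\dagger} d + d^{\dagger} b\right)
\end{equation}
```

    Here σ is the quantum-dot exciton lowering operator and b, d are the bright and dark plasmon modes; since only the bright mode couples to the far field, gain mediated by the dark mode relies on the plasmon-plasmon coupling κ and hence on the spectral overlap between the two plasmons.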

    Deep Landscape Forecasting for Real-time Bidding Advertising

    The emergence of real-time auctions in online advertising has drawn considerable attention to modeling the market competition, i.e., bid landscape forecasting. The problem is formulated as forecasting the probability distribution of the market price for each ad auction. To handle the censorship issue caused by the second-price auction mechanism, many researchers have approached bid landscape forecasting by incorporating survival analysis from the medical research field. However, most existing solutions focus either on counting-based statistics of segmented sample clusters or on learning a parameterized model based on heuristic assumptions about distribution forms. Moreover, they fail to consider the sequential patterns of the features over the price space. In order to capture more sophisticated yet flexible patterns at a fine-grained level of the data, we propose a Deep Landscape Forecasting (DLF) model which combines deep learning for probability distribution forecasting and survival analysis for censorship handling. Specifically, we utilize a recurrent neural network to flexibly model the conditional winning probability with respect to each bid price. We then conduct bid landscape forecasting through the probability chain rule with strict mathematical derivations and, in an end-to-end manner, optimize the model by minimizing two negative likelihood losses with comprehensive motivations. Without any specific assumption about the distribution form of the bid landscape, our model shows great advantages over previous works on fitting various sophisticated market price distributions. In experiments over two large-scale real-world datasets, our model significantly outperforms the state-of-the-art solutions under various metrics. Comment: KDD 2019. The reproducible code and dataset link is https://github.com/rk2900/DL
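    The chain-rule construction can be made concrete with a short sketch: a recurrent network emits, for each discretized price level, the conditional probability that the market price equals that level given it has not occurred at a lower one, and multiplying the complements along the price axis yields censoring-aware survival and winning probabilities. The layer sizes, feature handling, and loss weighting below are illustrative assumptions, not the authors' exact DLF architecture.

```python
# Illustrative PyTorch sketch of chain-rule bid landscape forecasting with
# censorship handling; an assumed simplification, not the published DLF model.
import torch
import torch.nn as nn

class LandscapeRNN(nn.Module):
    def __init__(self, feat_dim, n_prices, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        self.n_prices = n_prices

    def hazards(self, x):
        # One RNN step per discretized price level; each step emits the
        # conditional probability that the market price equals that level,
        # given it has not occurred at any lower level.
        B = x.size(0)
        prices = torch.arange(self.n_prices, dtype=torch.float32, device=x.device)
        prices = prices.view(1, -1, 1).expand(B, -1, -1) / self.n_prices
        feats = x.unsqueeze(1).expand(B, self.n_prices, -1)
        out, _ = self.rnn(torch.cat([feats, prices], dim=-1))
        return torch.sigmoid(self.head(out)).squeeze(-1)        # (B, n_prices)

    def losses(self, x, market_price, bid, won):
        # market_price, bid: integer price-bucket indices; won: bool tensor.
        h = self.hazards(x)
        log_surv = torch.cumsum(torch.log1p(-h + 1e-8), dim=1)  # log prod_{l<=k}(1-h_l)
        idx = torch.arange(x.size(0), device=x.device)
        # Winning (uncensored): market price observed, maximize its probability
        # via the chain rule  p_z = h_z * prod_{l<z}(1 - h_l).
        lp_win = torch.log(h[idx, market_price] + 1e-8) + torch.where(
            market_price > 0, log_surv[idx, market_price - 1],
            torch.zeros_like(log_surv[:, 0]))
        # Losing (censored): only know the market price exceeds our bid,
        # maximize the survival probability up to the bid.
        lp_lose = log_surv[idx, bid]
        return -(torch.where(won, lp_win, lp_lose)).mean()
```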

    Investigating the relevance of major signaling pathways in cancer survival using a biologically meaningful deep learning model

    BACKGROUND: Survival analysis is an important part of cancer studies. In addition to the existing Cox proportional hazards model, deep learning models have recently been proposed for survival prediction; these directly integrate multi-omics data for a large number of genes using fully connected dense deep neural network layers, which are hard to interpret. On the other hand, cancer signaling pathways are important and interpretable concepts that define the signaling cascades regulating cancer development and drug resistance. Thus, it is important to investigate potential associations between patient survival and individual signaling pathways, which can help domain experts understand deep learning models making specific predictions. RESULTS: In this exploratory study, we proposed to investigate the relevance and influence of a set of core cancer signaling pathways in the survival analysis of cancer patients. Specifically, we built a simplified and partially biologically meaningful deep neural network, DeepSigSurvNet, for survival prediction. The model integrates gene expression and copy number data of 1967 genes from 46 major signaling pathways. We applied the model to four types of cancer and investigated the influence of the 46 signaling pathways in these cancers. Interestingly, the interpretable analysis identified distinct patterns among these signaling pathways, which are helpful in understanding their relevance to predicting cancer patients' survival time. These highly relevant signaling pathways, when combined with other essential signaling pathway inhibitors, can be novel targets for drug and drug combination prediction to improve cancer patients' survival time. CONCLUSION: The proposed DeepSigSurvNet model can facilitate the understanding of the implications of signaling pathways for cancer patients' survival by integrating multi-omics data and clinical factors.
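    One way to picture a "partially biologically meaningful" network of this kind is a masked layer whose gene-to-pathway connections follow a membership matrix, so each pathway node receives only its member genes' expression and copy-number signals before a small dense head produces a risk score. The sketch below follows that idea with a Cox-style loss; the mask construction, layer sizes, and loss are assumptions for illustration, not the published DeepSigSurvNet architecture.

```python
# Pathway-structured survival network sketch; an assumed illustration of the
# idea described above, not the published DeepSigSurvNet architecture.
import torch
import torch.nn as nn

class PathwaySurvNet(nn.Module):
    def __init__(self, gene_pathway_mask, n_clinical=0):
        # gene_pathway_mask: (n_genes, n_pathways) binary membership tensor.
        super().__init__()
        n_genes, n_pathways = gene_pathway_mask.shape
        self.register_buffer("mask", gene_pathway_mask.float())
        # Two input channels per gene: expression and copy number.
        self.expr_w = nn.Parameter(torch.randn(n_genes, n_pathways) * 0.01)
        self.cnv_w = nn.Parameter(torch.randn(n_genes, n_pathways) * 0.01)
        self.head = nn.Sequential(
            nn.Linear(n_pathways + n_clinical, 32), nn.ReLU(),
            nn.Linear(32, 1),                       # log-risk score
        )

    def pathway_scores(self, expr, cnv):
        # Masked linear maps keep only gene-to-pathway edges present in the
        # membership matrix, giving one interpretable activation per pathway.
        return expr @ (self.expr_w * self.mask) + cnv @ (self.cnv_w * self.mask)

    def forward(self, expr, cnv, clinical=None):
        p = torch.relu(self.pathway_scores(expr, cnv))
        if clinical is not None:
            p = torch.cat([p, clinical], dim=1)
        return self.head(p).squeeze(-1)

def cox_partial_log_lik(risk, time, event):
    # Breslow-style Cox partial likelihood, a common choice for training
    # such models; the paper's exact loss may differ.
    order = torch.argsort(time, descending=True)
    risk, event = risk[order], event[order]
    log_cum = torch.logcumsumexp(risk, dim=0)   # log of the risk-set sums
    return ((risk - log_cum) * event).sum() / event.sum().clamp(min=1)
```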

    Corporate Lobbying and ESG Reports: Patterns among US Companies, 1999–2017

    To lobby legislators, interest groups need to signal their ability to help legislators win elections and to provide them with policy-relevant information. We explore for-profit companies' use of environmental, social, and governance (ESG) reports as a signaling device to promote their reputation to legislators and convey their ability to provide electoral and policymaking support, which is valuable for lobbying. To this end, we create a panel dataset combining ESG reports issued by US companies with the same companies' lobbying and campaign contribution records from 1999 to 2017. We expect companies to issue more ESG reports, as well as reports containing more quantitative content, when they lobby. The data conform to our expectations. We also reason that lobbying may be more strongly related to ESG reporting when it is coupled with campaign contributions made by affiliated corporate political action committees, but the data do not support this expectation.

    The Entity-Deduction Arena: A playground for probing the conversational reasoning and planning capabilities of LLMs

    Large language models (LLMs) are effective at answering questions that are clearly asked. However, when faced with ambiguous queries they can act unpredictably and produce incorrect outputs. This underscores the need for intelligent agents capable of asking clarification questions to resolve ambiguities effectively. This capability requires complex understanding, state tracking, reasoning, and planning over multiple conversational turns. However, directly measuring this can be challenging. In this paper, we offer a surrogate problem which assesses an LLM's capability to deduce an entity unknown to itself, but revealed to a judge, by asking the judge a series of queries. This entity-deducing game can serve as an evaluation framework to probe the conversational reasoning and planning capabilities of language models. We systematically evaluate various LLMs and discover significant differences in their performance on this task. We find that strong LLMs like GPT-4 outperform human players by a large margin. We further employ Behavior Cloning (BC) to examine whether a weaker model can imitate a stronger model and generalize to new data or domains using only the stronger model's demonstrations. We finally propose using Reinforcement Learning to enhance the reasoning and planning capacity of Vicuna models through episodes of game playing, which leads to significant performance improvement. We hope that this problem offers insights into how autonomous agents could be trained to behave more intelligently in ambiguous circumstances. Comment: 22 pages
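    The evaluation protocol itself is simple to sketch: a player model interrogates a judge that holds a hidden entity, and the success rate and number of turns over many entities become the metric. The ask/answer callables, turn budget, and stopping rule below are placeholders, not the paper's released harness.

```python
# Minimal sketch of an entity-deduction evaluation loop of the kind the
# abstract describes; callables and stopping rule are hypothetical.
from typing import Callable

def play_entity_deduction(entity: str,
                          player_ask: Callable[[list[tuple[str, str]]], str],
                          judge_answer: Callable[[str, str], str],
                          max_turns: int = 20) -> dict:
    history: list[tuple[str, str]] = []   # (question, judge reply) pairs
    for turn in range(1, max_turns + 1):
        question = player_ask(history)          # player sees only the dialogue so far
        reply = judge_answer(entity, question)  # judge also sees the hidden entity
        history.append((question, reply))
        # A final guess counts as solved if the judge confirms it.
        if reply.strip().lower().startswith("correct"):
            return {"solved": True, "turns": turn, "history": history}
    return {"solved": False, "turns": max_turns, "history": history}
```

    Averaging the solved flag and turn counts over a pool of hidden entities yields the kind of aggregate score used to compare models.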